
Fully Decentralized Policies for Multi-Agent Systems: An Information Theoretic Approach

Roel Dobbe, David Fridovich-Keil, Claire Tomlin

Neural Information Processing Systems

Learning cooperative policies for multi-agent systems is often challenged by partial observability and a lack of coordination. In some settings, the structure of a problem allows a distributed solution with limited communication. Here, we consider a scenario where no communication is available, and instead we learn local policies for all agents that collectively mimic the solution to a centralized multi-agent static optimization problem. Our main contribution is an information-theoretic framework, based on rate-distortion theory, that facilitates analysis of how well the resulting fully decentralized policies are able to reconstruct the optimal solution. Moreover, this framework provides a natural extension that addresses which nodes an agent should communicate with to improve the performance of its individual policy.
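For context, the rate-distortion function invoked here is the standard one from information theory; the following is a sketch of the textbook definition, not the paper's specific formulation:

```latex
R(D) \;=\; \min_{p(\hat{x}\mid x)\,:\;\mathbb{E}\!\left[d(X,\hat{X})\right]\,\le\, D} \; I(X;\hat{X})
```

Informally, $R(D)$ is the minimum information rate needed to reconstruct a source $X$ as $\hat{X}$ while keeping the expected distortion $d(X,\hat{X})$ below a budget $D$; in the decentralization setting described above, one might (as an illustrative reading) take $X$ to be the centralized optimal action and $\hat{X}$ its reconstruction from an agent's local observations.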


Code2Snapshot: Using Code Snapshots for Learning Representations of Source Code

Md Rafiqul Islam Rabin, Mohammad Amin Alipour

arXiv.org Artificial Intelligence

There are several approaches for encoding source code in the input vectors of neural models. These approaches attempt to include various syntactic and semantic features of input programs in their encoding. In this paper, we investigate Code2Snapshot, a novel representation of source code based on snapshots of input programs. We evaluate several variations of this representation and compare its performance with state-of-the-art representations that utilize the rich syntactic and semantic features of input programs. Our preliminary study on the utility of Code2Snapshot in code summarization and code classification tasks suggests that simple snapshots of input programs achieve performance comparable to state-of-the-art representations. Interestingly, obscuring input programs has an insignificant impact on Code2Snapshot's performance, suggesting that, for some tasks, neural models may achieve high performance by relying merely on the structure of input programs.
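To make the idea of a "snapshot" concrete: the abstract describes representing a program by its visual appearance rather than its tokens or AST. A minimal, stdlib-only sketch of this idea (not the paper's actual pipeline, which renders real images) approximates a snapshot as a binary character-occupancy grid, capturing only the layout of the code:

```python
def snapshot(source: str, width: int = 40, height: int = 10):
    """Approximate a code 'snapshot' as a binary occupancy grid.

    Cell (r, c) is 1 if a non-whitespace character occupies that
    position in the rendered text, else 0. Note this preserves only
    structure (indentation, line lengths), not identifiers -- which
    is exactly the information an obscured snapshot would retain.
    """
    lines = source.splitlines()[:height]
    grid = [[0] * width for _ in range(height)]
    for r, line in enumerate(lines):
        for c, ch in enumerate(line[:width]):
            if not ch.isspace():
                grid[r][c] = 1
    return grid

code = "def add(a, b):\n    return a + b\n"
g = snapshot(code)
```

A fixed-size grid like this can be fed to an image-style model (e.g. a small CNN) exactly as a grayscale bitmap would be; the function name and grid dimensions here are illustrative choices, not from the paper.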


An ornithologist, a cellist and a human rights activist: the 2022 MacArthur Fellows

NPR Technology

This year's 25 MacArthur Fellows will each receive $800,000, a "no-strings-attached award to extraordinarily talented and creative individuals as an investment in their potential," according to the MacArthur Foundation website. It is perhaps the most coveted award in academia, the arts and sciences. You can't get nominated, and the pool of candidates is a tightly held secret.


Can Artificial Intelligence Increase Our Morality?

#artificialintelligence

In discussions of AI ethics, there's a lot of talk of designing "ethical" algorithms, those that produce behaviors we like. People have called for software that treats people fairly, that avoids violating privacy, that cedes to humanity decisions about who should live and die. But what about AI that benefits humans' morality, our own capacity to behave virtuously? That's the subject of a talk on "AI and Moral Self-Cultivation" given last week by Shannon Vallor, a philosopher at Santa Clara University who studies technology and ethics. The talk was part of a meeting on "Character, Social Connections and Flourishing in the 21st Century," hosted by Templeton World Charity Foundation, in Nassau, The Bahamas.

